Total Variation Classes Beyond 1d: Minimax Rates, and the Limitations of Linear Smoothers
We consider the problem of estimating a function defined over $n$ locations on a $d$-dimensional grid (having all side lengths equal to $n^{1/d}$). When the function is constrained to have discrete total variation bounded by $C_n$, we derive the minimax optimal (squared) $\ell_2$ estimation error rate, parametrized by $n, C_n$. Total variation denoising, also known as the fused lasso, is seen to be rate optimal. Several simpler estimators exist, such as Laplacian smoothing and Laplacian eigenmaps. A natural question is: can these simpler estimators perform just as well?
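To make the objects in the abstract concrete, here is a minimal sketch (not code from the paper): the discrete total variation functional over a 2D grid graph, and Laplacian smoothing, the canonical linear smoother, which penalizes squared edge differences where TV denoising penalizes absolute ones. The grid size, noise level, and tuning parameter below are illustrative choices, not values from the paper.

```python
import numpy as np

def grid_edges(rows, cols):
    """Edges of the rows x cols grid graph (4-neighbor connectivity)."""
    idx = lambda r, c: r * cols + c
    edges = []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                edges.append((idx(r, c), idx(r, c + 1)))
            if r + 1 < rows:
                edges.append((idx(r, c), idx(r + 1, c)))
    return edges

def grid_laplacian(rows, cols):
    """Graph Laplacian L = D - A of the grid graph."""
    n = rows * cols
    L = np.zeros((n, n))
    for i, j in grid_edges(rows, cols):
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    return L

def total_variation(theta, edges):
    """Discrete TV: sum of |theta_i - theta_j| over grid edges."""
    return sum(abs(theta[i] - theta[j]) for i, j in edges)

def laplacian_smooth(y, L, lam):
    """Linear smoother: theta_hat = (I + lam*L)^{-1} y. This uses a
    quadratic penalty on edge differences; TV denoising instead uses
    the l1 edge penalty and is not linear in y."""
    return np.linalg.solve(np.eye(len(y)) + lam * L, y)

# Tiny demo on a 4x4 grid (n = 16, d = 2): a piecewise-constant
# signal with low TV, observed with Gaussian noise, then smoothed.
rows = cols = 4
edges = grid_edges(rows, cols)
L = grid_laplacian(rows, cols)
rng = np.random.default_rng(0)
theta0 = np.kron([0.0, 1.0], np.ones(8))    # top half 0, bottom half 1
y = theta0 + 0.3 * rng.standard_normal(16)  # noisy observations
theta_hat = laplacian_smooth(y, L, lam=2.0)
```

The limitation the paper studies is visible in this setup: any estimator of the form `S @ y` with `S` fixed (as here) averages across the sharp boundary of `theta0`, which is what dooms linear smoothers over TV classes for large grids.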
Reviews: Total Variation Classes Beyond 1d: Minimax Rates, and the Limitations of Linear Smoothers
This paper presents several results on the problem of function denoising over a $d$-dimensional grid. In particular, it demonstrates that simple linear methods are insufficient for denoising functions of bounded variation, and that TV denoisers are minimax optimal (up to a log factor). Overall, I feel these are solid contributions which should be of interest to the mathematical statistics, machine learning, and signal processing communities. My main criticism is that the simulations shown in Figure 3 of the main text and Figure 1 of the supplementary use rather artificial signals. The paper would be much stronger if simulations were performed on real data sets (such as 2D images, 3D volumetric scans, etc.) to assess the real-world performance of TV denoising versus Laplacian smoothing and Laplacian eigenmaps.
Sadhanala, Veeranjaneyulu, Wang, Yu-Xiang, Tibshirani, Ryan J.